
Natural Language Query

Process natural language queries to retrieve project and task information from the Project Tracker MCP Server. Ask questions like "Show me John's overdue tasks" to get relevant data.

Instructions

Process natural language queries with enhanced entity discovery and intelligent analysis

Input Schema

Name      Required   Description                                                     Default
prompt    Yes        Natural language query (e.g., "Show me John's overdue tasks")   (none)

Implementation Reference

  • MCP tool definition including name, Zod input schema, description, and handler function that executes the natural language query processing logic
    import { z } from 'zod';
    // NaturalLanguageQueryProcessor is defined elsewhere in this project;
    // the import path shown here is assumed.
    import { NaturalLanguageQueryProcessor } from './query_processor';

    export const naturalLanguageQueryTool = {
      name: 'Natural Language Query',
      description:
        'Process natural language queries with enhanced entity discovery and intelligent analysis',
      parameters: z.object({
        prompt: z
          .string()
          .min(5, 'Prompt must be at least 5 characters')
          .max(500, 'Prompt must be less than 500 characters')
          .describe(
            'Natural language query (e.g., "Show me Sarah\'s overdue tasks", "Analyze project health")',
          ),
      }),
      handler: async ({ prompt }: { prompt: string }) => {
        const processor = new NaturalLanguageQueryProcessor(process.env.MCP_DEBUG_MODE === 'true');
        const result = await processor.processQuery(prompt);
    
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify(result, null, 2),
            },
          ],
        };
      },
    };
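The handler above wraps its result in the MCP text-content response shape. A minimal self-contained sketch of that wrapping, with a hypothetical helper name (`makeTextResult` does not appear in the source):

```typescript
// Minimal sketch of the MCP text-content response shape produced by
// the handler above. `makeTextResult` is a hypothetical helper name.
interface TextContent {
  type: 'text';
  text: string;
}

interface ToolResult {
  content: TextContent[];
}

function makeTextResult(result: unknown): ToolResult {
  return {
    content: [
      {
        type: 'text',
        // Pretty-print with a 2-space indent, matching the handler above
        text: JSON.stringify(result, null, 2),
      },
    ],
  };
}
```

Any serializable query result can be passed in; the agent receives a single text block containing the pretty-printed JSON.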
  • Zod-based input schema definition for the tool parameters
    parameters: z.object({
      prompt: z
        .string()
        .min(5, 'Prompt must be at least 5 characters')
        .max(500, 'Prompt must be less than 500 characters')
        .describe(
          'Natural language query (e.g., "Show me Sarah\'s overdue tasks", "Analyze project health")',
        ),
    }),
  • Registration of the Natural Language Query tool in the mcpTools array exported for use by the MCP server
    import { naturalLanguageQueryTool } from './natural_language_query';
    import { workloadAnalysisTool } from './workload_analysis';
    import { riskAssessmentTool } from './risk_assessment';
    
    // EXPANSION: Consolidated tool registry for MCP server
    export const mcpTools = [naturalLanguageQueryTool, workloadAnalysisTool, riskAssessmentTool];
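A server dispatching a tools/call request would typically resolve a tool by its registered name from this array. A hedged sketch under a simplified tool shape (the real entries also carry a Zod `parameters` schema and a handler; the descriptions below are placeholders):

```typescript
// Simplified tool shape for illustration only.
interface RegisteredTool {
  name: string;
  description: string;
}

// Stand-in registry mirroring the three tools exported above.
const mcpTools: RegisteredTool[] = [
  { name: 'Natural Language Query', description: 'Process natural language queries' },
  { name: 'Workload Analysis', description: 'Analyze team workload' },
  { name: 'Risk Assessment', description: 'Assess project risk' },
];

// Resolve a tool by its registered name, or undefined if absent.
function findTool(name: string): RegisteredTool | undefined {
  return mcpTools.find((t) => t.name === name);
}
```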
  • Tool-specific schema provided in server for tool listing response (fallback/JSON schema)
    case 'Natural Language Query':
      return {
        type: 'object',
        properties: {
          prompt: {
            type: 'string',
            description: 'Natural language query (e.g., "Show me John\'s overdue tasks")',
            minLength: 5,
            maxLength: 500,
          },
        },
        required: ['prompt'],
      };
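The fallback JSON-schema constraints above (required string, minLength 5, maxLength 500) can be enforced without Zod. A minimal sketch; the function name is hypothetical:

```typescript
// Validate a prompt against the fallback JSON-schema constraints:
// required, string type, minLength 5, maxLength 500.
function validatePrompt(prompt: unknown): { ok: boolean; error?: string } {
  if (typeof prompt !== 'string') {
    return { ok: false, error: 'prompt is required and must be a string' };
  }
  if (prompt.length < 5) {
    return { ok: false, error: 'Prompt must be at least 5 characters' };
  }
  if (prompt.length > 500) {
    return { ok: false, error: 'Prompt must be less than 500 characters' };
  }
  return { ok: true };
}
```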
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'enhanced entity discovery and intelligent analysis,' which hints at processing capabilities, but fails to describe key behavioral traits such as whether it's read-only or mutative, authentication needs, rate limits, or what the output looks like (e.g., structured data, analysis results). This leaves significant gaps for an agent to understand how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence: 'Process natural language queries with enhanced entity discovery and intelligent analysis.' It is front-loaded with the core purpose and includes no unnecessary words, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity implied by 'enhanced entity discovery and intelligent analysis', combined with the lack of annotations and the absence of an output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., analysis results, discovered entities), behavioral constraints, or how it integrates with sibling tools. This leaves the agent with insufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'prompt' parameter clearly documented as a natural language query with length constraints. The description adds minimal value beyond this, as it doesn't provide additional context like examples of effective prompts or semantic nuances. With high schema coverage, the baseline score of 3 is appropriate, as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Process natural language queries with enhanced entity discovery and intelligent analysis.' It specifies the verb ('process') and resource ('natural language queries'), and mentions key capabilities ('entity discovery', 'intelligent analysis'). However, it doesn't explicitly differentiate from sibling tools like 'Risk Assessment' or 'Workload Analysis' in terms of when to use each, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the sibling tools ('Risk Assessment' and 'Workload Analysis'). It doesn't specify contexts, exclusions, or alternatives. The only implied usage is for natural language queries, but this is too vague for effective tool selection among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
